5 research outputs found

    Cortical and subcortical speech-evoked responses in young and older adults: Effects of background noise, arousal states, and neural excitability

    This thesis investigated how the sensory auditory systems of human adults across a wide age range process speech signals, using electroencephalography (EEG). It focused on two types of speech-evoked phase-locked responses: (i) cortical responses (theta-band phase-locked responses) that reflect processing of the low-frequency, slowly-varying envelopes of speech; and (ii) subcortical/peripheral responses (frequency-following responses; FFRs) that reflect encoding of speech periodicity and temporal fine structure. The aim was to elucidate, through three studies, how these neural activities are affected by internal (ageing, hearing loss, level of arousal and neural excitability) and external (background noise) factors encountered in daily life. Study 1 examined theta-band phase-locking and FFRs in young and older adults, asking how ageing and hearing loss affect these activities in quiet and noisy environments and how the activities are associated with speech-in-noise perception. The results showed that ageing and hearing loss affect speech-evoked phase-locked responses through different mechanisms, and that the effects of ageing on cortical and subcortical activities play different roles in speech-in-noise perception. Study 2 investigated how the level of arousal, or consciousness, affects phase-locked responses in young and older adults. The results showed that both theta-band phase-locking and FFRs decrease as the level of arousal decreases. It was further found that the neuro-regulatory role of sleep spindles on theta-band phase-locking differs between young and older adults, indicating that the mechanisms of neuro-regulation for phase-locked responses across arousal states are age-dependent. Study 3 established a causal relationship between auditory cortical excitability and FFRs using combined transcranial direct current stimulation (tDCS) and EEG. FFRs were measured before and after tDCS was applied over the auditory cortices. The results showed that changes in the neural excitability of the right auditory cortex can alter FFR magnitudes along the contralateral pathway, with theoretical and clinical implications that causally link auditory cortical function with the neural encoding of speech periodicity. Taken together, the findings of this thesis advance our understanding of how speech signals are processed via neural phase-locking in everyday life and across the lifespan.
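    The two response types described above can be quantified directly from epoched EEG. The snippet below is a minimal illustrative sketch (not the thesis code): it computes theta-band inter-trial phase coherence as a stand-in for theta-band phase-locking, and the spectral magnitude of the trial-averaged response at an assumed fundamental frequency as a stand-in for FFR strength. The sampling rate, trial counts, 100 Hz F0 and the random placeholder data are all assumptions for demonstration.

```python
# Minimal sketch of the two phase-locked measures, using synthetic data.
# fs, f0, band limits and array shapes are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000                        # sampling rate in Hz (assumption)
n_trials, n_samples = 200, 500   # trials x samples per epoch (assumption)
rng = np.random.default_rng(0)
eeg = rng.standard_normal((n_trials, n_samples))  # stand-in for epoched EEG

def theta_phase_locking(epochs, fs, band=(4, 8)):
    """Inter-trial phase coherence in the theta band (one value per sample)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=1)
    phase = np.angle(hilbert(filtered, axis=1))
    return np.abs(np.mean(np.exp(1j * phase), axis=0))  # ranges 0..1

def ffr_magnitude(epochs, fs, f0=100.0):
    """Spectral magnitude of the trial-averaged response at the stimulus F0."""
    evoked = epochs.mean(axis=0)            # averaging keeps phase-locked energy
    spectrum = np.abs(np.fft.rfft(evoked)) / len(evoked)
    freqs = np.fft.rfftfreq(len(evoked), d=1 / fs)
    return spectrum[np.argmin(np.abs(freqs - f0))]

print("mean theta ITPC:", theta_phase_locking(eeg, fs).mean())
print("FFR magnitude at 100 Hz:", ffr_magnitude(eeg, fs))
```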

    The possible role of early-stage phase-locked neural activities in speech-in-noise perception in human adults across age and hearing loss

    Ageing affects auditory neural phase-locked activities, which could increase the challenges older adults experience during speech-in-noise (SiN) perception. However, evidence for how ageing affects SiN perception through these phase-locked activities is still lacking. It is also unclear whether the influences of ageing on phase-locked activities evoked by different acoustic properties affect SiN perception through similar or different mechanisms. The present study addressed these issues by measuring early-stage phase-locked encoding of speech in quiet and noisy backgrounds (speech-shaped noise (SSN) and multi-talker babbles) in adults across a wide age range (19-75 years old). Participants passively listened to a repeated vowel whilst the frequency-following response (FFR) to the fundamental frequency, which has primarily subcortical sources, and the cortical phase-locked response to slowly-fluctuating acoustic envelopes were recorded. We studied how these activities are affected by age and age-related hearing loss, and how they relate to SiN performance (word recognition in sentences in noise). First, we found that the effects of age and hearing loss differ for the FFR and slow-envelope phase-locking. The FFR decreased significantly with age and high-frequency (≥ 2 kHz) hearing loss but increased with low-frequency (< 2 kHz) hearing loss, whilst slow-envelope phase-locking increased significantly with age and with hearing loss across frequencies. Second, the relationships between the two types of phase-locked activities and SiN performance also differed: the FFR corresponded positively to SiN performance under multi-talker babbles, and slow-envelope phase-locking to SiN performance under SSN. Finally, we used mediation analyses to investigate how age and hearing loss affected SiN perception through the phase-locked activities (see the sketch below). Both types of activities significantly mediated the relation between age/hearing loss and SiN perception, but in distinct ways. Specifically, the FFR decreased with age and high-frequency hearing loss, which in turn contributed to poorer SiN performance, but increased with low-frequency hearing loss, which in turn contributed to better SiN performance under multi-talker babbles. Slow-envelope phase-locking increased with age and hearing loss, which in turn contributed to better SiN performance under both SSN and multi-talker babbles. Taken together, the present study provides evidence for distinct neural mechanisms of early-stage auditory phase-locked encoding of different acoustic properties through which ageing affects SiN perception.
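    The mediation analyses referred to above follow a standard single-mediator logic (age or hearing loss → phase-locked activity → SiN performance). The sketch below is not the study's analysis code; it illustrates that logic with simulated data and a percentile bootstrap of the indirect effect. Variable names, effect sizes and the sample size are assumptions for demonstration only.

```python
# Regression-based single-mediator sketch: age -> phase-locking -> SiN score.
# All data are simulated; coefficients are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 60
age = rng.uniform(19, 75, n)
phase_locking = 0.02 * age + rng.standard_normal(n) * 0.5     # mediator
sin_score = 0.8 * phase_locking - 0.01 * age + rng.standard_normal(n)

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def slope_controlling(x, y, covariate):
    """Partial slope of y on x, controlling for a covariate."""
    X = np.column_stack([np.ones_like(x), x, covariate])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Path a: age -> mediator; path b: mediator -> outcome, controlling for age
a = slope(age, phase_locking)
b = slope_controlling(phase_locking, sin_score, age)
indirect = a * b

# Percentile bootstrap confidence interval for the indirect (mediated) effect
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(slope(age[idx], phase_locking[idx])
                * slope_controlling(phase_locking[idx], sin_score[idx], age[idx]))
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {indirect:.3f}, 95% CI [{ci_low:.3f}, {ci_high:.3f}]")
```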

    Distinct roles of delta- and theta-band neural tracking for sharpening and predictive coding of multi-level speech features during spoken language processing

    The brain tracks and encodes multi-level speech features during spoken language processing. It is evident that this speech tracking is dominant at low frequencies (< 8 Hz), including the delta and theta bands. Recent research has demonstrated distinctions between delta- and theta-band tracking but has not elucidated how they differentially encode speech across linguistic levels. Here, we hypothesised that delta-band tracking encodes prediction errors (enhanced processing of unexpected features) while theta-band tracking encodes neural sharpening (enhanced processing of expected features) when people perceive speech with different linguistic contents. EEG responses were recorded while normal-hearing participants attended to continuous auditory stimuli containing different phonological/morphological and semantic contents: (1) real-words, (2) pseudo-words and (3) time-reversed speech. We employed multivariate temporal response functions to measure EEG reconstruction accuracies in response to acoustic (spectrogram), phonetic and phonemic features, with a partialling procedure that singles out the unique contribution of each feature. We found higher delta-band accuracies for pseudo-words than for real-words and time-reversed speech, especially during encoding of phonetic features. Notably, individual time-lag analyses showed that the significantly higher accuracies for pseudo-words than real-words started at early processing stages for phonetic encoding (< 100 ms post-feature) and at later stages for acoustic and phonemic encoding (> 200 and 400 ms post-feature, respectively). Theta-band accuracies, on the other hand, were higher when stimuli had richer linguistic content (real-words > pseudo-words > time-reversed speech). These effects also started at early stages (< 100 ms post-feature) during encoding of all individual features and when all features were combined. We argue that these results indicate that delta-band tracking plays a role in predictive coding, leading to greater tracking of pseudo-words due to the presence of unexpected/unpredicted semantic information, while theta-band tracking encodes sharpened signals caused by more expected phonological/morphological and semantic contents. The early presence of these effects reflects rapid computation of sharpening and prediction errors. Moreover, by measuring changes in EEG alpha power, we did not find evidence that the observed effects can be solely explained by attentional demands or listening effort. Finally, we used directed information analyses to illustrate feedforward and feedback information transfers between prediction errors and sharpening across linguistic levels, showing how our results fit the hierarchical Predictive Coding framework. Together, these findings suggest distinct roles for delta- and theta-band neural tracking in sharpening and predictive coding of multi-level speech features during spoken language processing.
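    Reconstruction accuracy in this kind of analysis is typically obtained from a backward (decoding) temporal response function: EEG channels at several time lags are regressed onto a speech feature, and accuracy is the correlation between the reconstructed and actual feature on held-out data. The sketch below is a simplified illustration of that idea with simulated data, not the paper's pipeline; the sampling rate, lag window and ridge parameter are assumptions, and the partialling and cross-validation steps used in the study are omitted.

```python
# Backward-TRF sketch: reconstruct a speech envelope from lagged multichannel
# EEG with ridge regression, then score accuracy as Pearson's r on a hold-out.
# All data are simulated; fs, lag range and lambda are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
fs = 64                                   # assumed (down)sampled rate in Hz
n_samples, n_channels = fs * 120, 32      # 2 minutes of data, 32 channels
envelope = rng.standard_normal(n_samples)                  # stand-in envelope
eeg = np.outer(envelope, rng.standard_normal(n_channels))  # toy "tracking"
eeg += rng.standard_normal(eeg.shape) * 5                  # plus noise

def lagged_design(x, lags):
    """Stack time-lagged copies of the EEG: samples x (channels * lags)."""
    cols = []
    for lag in lags:
        shifted = np.roll(x, lag, axis=0)
        shifted[:max(lag, 0)] = 0          # zero out wrap-around rows
        cols.append(shifted)
    return np.concatenate(cols, axis=1)

lags = range(0, int(0.25 * fs))            # 0-250 ms of lags (assumption)
X = lagged_design(eeg, lags)

# Split into train/test halves and solve the ridge system
half = n_samples // 2
Xtr, Xte, ytr, yte = X[:half], X[half:], envelope[:half], envelope[half:]
lam = 1e3                                  # ridge parameter (assumption)
w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(X.shape[1]), Xtr.T @ ytr)

reconstruction = Xte @ w
accuracy = np.corrcoef(reconstruction, yte)[0, 1]
print(f"envelope reconstruction accuracy r = {accuracy:.3f}")
```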

    Causal Relationship between the Right Auditory Cortex and Speech-Evoked Envelope-Following Response: Evidence from Combined Transcranial Stimulation and Electroencephalography

    The speech-evoked envelope-following response (EFR) reflects the brain's encoding of speech periodicity and serves as a biomarker for pitch and speech perception and for various auditory and language disorders. Although the EFR is thought to originate in the subcortex, recent research has illustrated a right-hemispheric cortical contribution to the EFR. However, it is unclear whether this contribution is causal. This study aimed to establish this causality by combining transcranial direct current stimulation (tDCS) with measurement of the EFR (pre- and post-tDCS) via scalp-recorded electroencephalography. We applied tDCS over the left and right auditory cortices in right-handed, normal-hearing participants and examined whether altering cortical excitability via tDCS changes the EFR during monaural listening to speech syllables. We found significant changes in EFR magnitude when tDCS was applied over the right auditory cortex, compared with sham stimulation, for the listening ear contralateral to the stimulation site. No such effect was found when tDCS was applied over the left auditory cortex. Crucially, we further observed a hemispheric laterality: the aftereffect was significantly greater for tDCS applied over the right than over the left auditory cortex in the contralateral-ear condition. Our findings thus provide the first evidence validating the causal relationship between the right auditory cortex and the EFR.
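    The pre/post design described above amounts to comparing EFR magnitudes at the stimulus F0 before and after stimulation within participants. The following sketch is not the study's code; it shows one minimal way to run that comparison on simulated averaged responses with a paired t-test. The sampling rate, F0 and subject count are illustrative assumptions.

```python
# Pre/post tDCS comparison sketch: EFR magnitude at the stimulus F0 per
# subject, before and after stimulation, compared with a paired t-test.
# All responses are simulated; fs, f0 and n_subjects are assumptions.
import numpy as np
from scipy.stats import ttest_rel

fs, f0 = 1000, 100               # sampling rate and stimulus F0 in Hz
n_subjects, n_samples = 20, 500
rng = np.random.default_rng(3)

def efr_magnitude(evoked, fs, f0):
    """Spectral magnitude of an averaged response at the F0 bin."""
    spectrum = np.abs(np.fft.rfft(evoked)) / len(evoked)
    freqs = np.fft.rfftfreq(len(evoked), d=1 / fs)
    return spectrum[np.argmin(np.abs(freqs - f0))]

# Simulated trial-averaged responses per subject, pre and post tDCS
pre = [rng.standard_normal(n_samples) for _ in range(n_subjects)]
post = [rng.standard_normal(n_samples) for _ in range(n_subjects)]

pre_mag = np.array([efr_magnitude(x, fs, f0) for x in pre])
post_mag = np.array([efr_magnitude(x, fs, f0) for x in post])

t, p = ttest_rel(post_mag, pre_mag)
print(f"mean post - pre change: {np.mean(post_mag - pre_mag):.4f}, "
      f"t = {t:.2f}, p = {p:.3f}")
```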